Key Points:
- Transthoracic echocardiography (TTE) is foundational to the diagnosis and evaluation of myriad conditions, but image interpretation and results reporting remain manual
- Unlike previous applications of artificial intelligence (AI) to TTE, which have focused on individual views and medical conditions, PanEcho is a view-agnostic, multi-task AI model trained on 1.23 million echocardiographic videos that can perform 39 different TTE reporting tasks
- PanEcho demonstrated a median area under the receiver operating characteristic curve (AUC) of 0.91 across 18 classification tasks and a low median normalized mean absolute error (MAE) of 0.13 when estimating continuous parameters; it also identifies the most informative views for a given task and transfers accurate interpretation to novel pediatric populations
- PanEcho is a view-agnostic, multi-task, externally validated, open-source AI model that accurately interprets images and produces reports across a variety of views and patient populations
Transthoracic echocardiography (TTE) is key to the diagnosis and evaluation of myriad conditions. However, interpretation of imaging and reporting of findings remain manual. Across many disciplines, AI promises to automate such processes, with the potential to improve both accuracy and efficiency. Previous applications of AI to TTE have focused on individual echocardiographic views and medical conditions. Here, the investigators developed PanEcho, a view-agnostic, multi-task AI model that automates TTE interpretation across different views and acquisitions for all key echocardiographic metrics and notable findings.
The investigators developed PanEcho using 1.23 million echocardiographic videos from 33,927 TTE studies performed at a New England health system from 2016 to 2022. The model performs 39 different TTE reporting tasks, including assessment of myocardial and valvular structure and function, from any parasternal, apical, or subcostal view, in both B-mode and color Doppler. PanEcho comprises an image encoder that learns spatial features from individual frames, a Transformer (a neural network architecture built on self-attention) that models temporal information across frames, and task-specific output heads. In this study, PanEcho was evaluated on a distinct patient cohort from July to December 2022 as well as two external California-based cohorts from 2008 to 2020 to assess its diagnostic performance and its ability to serve as a foundation model for fine-tuning in new domains.
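The three-stage design described above (spatial encoder, temporal Transformer, task-specific heads) can be sketched in miniature. This is an illustrative NumPy sketch, not the authors' code: the dimensions, random weights, and single-head attention stand in for the learned image encoder and full Transformer, and the head counts simply mirror the 18 classification and 21 continuous-estimation tasks reported.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_frame(frame, W_spatial):
    """Spatial encoder (stand-in): flatten one frame and project it to
    an embedding vector. PanEcho uses a learned image encoder instead."""
    return frame.ravel() @ W_spatial

def self_attention(X):
    """Single-head self-attention over the frame axis: the core
    operation of the temporal Transformer component."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                     # frame-to-frame similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over frames
    return weights @ X                                # temporally mixed embeddings

# Hypothetical sizes: 16 frames of 32x32 video, 8-dim embeddings
T, H, W, D = 16, 32, 32, 8
video = rng.standard_normal((T, H, W))
W_spatial = rng.standard_normal((H * W, D)) * 0.01
W_class = rng.standard_normal((D, 18))    # heads for 18 classification tasks
W_reg = rng.standard_normal((D, 21))      # heads for 21 continuous tasks

emb = np.stack([encode_frame(f, W_spatial) for f in video])   # (T, D)
pooled = self_attention(emb).mean(axis=0)                     # study-level vector
class_logits = pooled @ W_class    # one logit per finding (e.g., severe AS)
estimates = pooled @ W_reg         # one value per metric (e.g., LVEF)
print(class_logits.shape, estimates.shape)  # (18,) (21,)
```

Because every view passes through the same encoder and shared temporal module before branching into per-task heads, a single model can serve all 39 reporting tasks regardless of which view it is shown, which is what makes the approach view-agnostic.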
PanEcho demonstrated a median area under the receiver operating characteristic curve (AUC) of 0.91 across 18 classification tasks. Selected results include detection of severe aortic stenosis (AS) with an AUC of 0.99, moderate-severe left ventricular (LV) systolic dysfunction with an AUC of 0.98, and moderate-severe LV dilation with an AUC of 0.95, among others. Furthermore, the model estimates continuous metrics with a median normalized mean absolute error (MAE) of 0.13 across 21 tasks (e.g., estimating LV ejection fraction (LVEF) with 4.4% MAE). PanEcho's multi-view evaluation demonstrates its ability to determine which views are most informative for each task. In addition, PanEcho's learned representations transfer efficiently to LVEF estimation in a pediatric population, outperforming current approaches (3.9% MAE vs 4.5% for the next-best model).
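Normalized MAE lets errors on tasks with different units (LVEF in percent, chamber dimensions in centimeters) be summarized on one scale, which is what makes a single median of 0.13 across 21 tasks meaningful. The sketch below assumes normalization by the standard deviation of the true values; the study's exact normalizer is not specified here, and the example values are invented for illustration.

```python
import numpy as np

def normalized_mae(y_true, y_pred):
    """MAE divided by the spread (here, the standard deviation) of the
    true labels, making errors unit-free and comparable across tasks."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred)) / np.std(y_true)

# Hypothetical LVEF task: raw MAE is in percentage points, but the
# normalized value is dimensionless and comparable to other tasks.
lvef_true = [55.0, 40.0, 65.0, 30.0]
lvef_pred = [51.0, 44.0, 60.0, 34.0]
print(round(normalized_mae(lvef_true, lvef_pred), 3))  # 0.316
```

Taking the median of these per-task normalized MAEs (rather than the mean) makes the summary robust to a few hard tasks with large errors.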
Ultimately, the investigators conclude that PanEcho is a view-agnostic, multi-task, externally validated, open-source AI model that can accurately interpret images and produce reports across a variety of views and patient populations. Currently, PanEcho is limited by retrospective validation on previously acquired datasets. Next steps include prospective validation in real-world patient care settings as well as evaluation of its utility in settings using portable and point-of-care ultrasound (POCUS).